Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
Translated by Google Translate
For autonomous robots interacting with and navigating real environments such as homes, it is useful to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works on object articulation identification require the object to be manipulated by a robot or a human. While recent works have addressed prediction from visual observations alone, they often assume category-level kinematic motion models or prior knowledge of the observed motion sequence. In this work, we propose FormNet, a neural network that identifies the articulation mechanism between pairs of object parts from a single frame of an RGB-D image and segmentation masks. The network is trained on 100k synthetic images of 149 articulated objects from 6 categories. The synthetic images are rendered by a photorealistic simulator with domain randomization. Our proposed model predicts motion residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation type classification accuracy of 82.5% on novel object instances within the trained categories. Experiments also demonstrate how this method enables generalization to novel categories and can be applied to real-world images without fine-tuning.
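The intuition behind classifying articulation from part motion can be shown with a toy example. A prismatic joint (a sliding drawer) moves every point of a part by the same translation, while a revolute joint (a hinged door) produces displacements that vary with position. The variance of the flow field is a deliberately simplistic, hypothetical discriminator used here only for illustration; FormNet itself learns articulation type and parameters from predicted residual flows over rendered RGB-D data.

```python
import numpy as np

def classify_articulation(points, flow, tol=1e-6):
    # points: (N, 2) part pixel coordinates; flow: (N, 2) displacement
    # per point. Identical displacements everywhere -> prismatic;
    # position-dependent displacements -> revolute.
    spread = np.var(flow, axis=0).sum()
    return "prismatic" if spread < tol else "revolute"

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Sliding drawer: every point translates identically.
translation = np.tile([0.1, 0.0], (4, 1))

# Hinged door: small rotation about the origin, so flow varies by point.
theta = 0.1
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotation_flow = pts @ rot.T - pts
```

A learned model replaces the hand-set threshold, but the underlying signal is the same: the spatial structure of the part's motion field.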
The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we:
- announce the first public release of the astroBERT language model;
- show how astroBERT improves over existing public language models on astrophysics-specific tasks;
- and detail how ADS plans to harness the unique structure of scientific papers, the citation graph, and citation context to further improve astroBERT.
The de facto standard of dynamic histogram binning for radiomic feature extraction leads to an elevated sensitivity to fluctuations in annotated regions. This may impact the majority of radiomic studies published recently and contribute to issues regarding poor reproducibility of radiomic-based machine learning that has led to significant efforts for data harmonization; however, we believe the issues highlighted here are comparatively neglected, but often remedied by choosing static binning. The field of radiomics has improved through the development of community standards and open-source libraries such as PyRadiomics. But differences in image acquisition, systematic differences between observers' annotations, and preprocessing steps still pose challenges. These can change the distribution of voxels altering extracted features and can be exacerbated with dynamic binning.
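The dynamic-vs-static distinction above is easy to see concretely. With dynamic binning the bin count is fixed, so the bin edges depend on the min/max of each particular annotation and shift with any fluctuation in the annotated region; with static binning the bin width is fixed on an absolute intensity scale, so the edges are identical across annotations. The following sketch uses hypothetical ROI data and parameter values purely for illustration:

```python
import numpy as np

# Hypothetical voxel intensities from two slightly different annotations
# of the same lesion (the second omits a few boundary voxels).
rng = np.random.default_rng(0)
roi_a = rng.normal(100.0, 20.0, 500)
roi_b = roi_a[:480]  # a small annotation fluctuation

def dynamic_bins(voxels, n_bins=32):
    # Dynamic binning: fixed bin COUNT, so the bin edges are derived
    # from this particular annotation's observed min/max.
    edges = np.linspace(voxels.min(), voxels.max(), n_bins + 1)
    return np.histogram(voxels, bins=edges)[0]

def static_bins(voxels, bin_width=5.0, lo=0.0, hi=200.0):
    # Static binning: fixed bin WIDTH anchored to absolute intensities,
    # so the edges are identical for every annotation.
    edges = np.arange(lo, hi + bin_width, bin_width)
    return np.histogram(voxels, bins=edges)[0]
```

Any texture feature computed on the dynamic histogram inherits the edge shift between `roi_a` and `roi_b`, whereas the static histogram only changes by the counts of the voxels actually added or removed.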
Throughout computational science, there is a growing need to utilize the continual improvements in raw computational horsepower to achieve greater physical fidelity through scale-bridging rather than brute-force increases in the number of mesh elements. For instance, quantitative predictions of transport in nanoporous media, critical to hydrocarbon extraction from tight shale formations, are impossible without accounting for molecular-level interactions. Similarly, inertial confinement fusion simulations rely on numerical diffusion to emulate molecular effects such as non-local transport and mixing without truly accounting for molecular interactions. With these two disparate applications in mind, we develop a novel capability which uses an active learning approach to optimize the use of local fine-scale simulations for informing coarse-scale hydrodynamics. Our approach addresses three challenges: forecasting continuum coarse-scale trajectories to speculatively execute new fine-scale molecular dynamics calculations, dynamically updating the coarse scale from fine-scale calculations, and quantifying uncertainty in the neural network models.
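The control flow of such an uncertainty-driven coupling can be sketched in a few lines. This is a minimal toy, not the described capability: the surrogate here is a nearest-neighbour ensemble rather than a neural network, and the function names, the relaxation model, and the disagreement threshold are all hypothetical. The point it illustrates is the loop structure: predict cheaply, measure disagreement, and pay for an expensive fine-scale run only when the surrogate is unsure.

```python
import random
import statistics

def fine_scale_simulation(state):
    # Stand-in for an expensive molecular-dynamics calculation that
    # relaxes the state toward an equilibrium value of 1.0.
    return 0.9 * state + 0.1

class EnsembleSurrogate:
    """Toy ensemble model: member disagreement acts as the uncertainty."""
    def __init__(self, n_members=5):
        self.members = [random.Random(seed) for seed in range(n_members)]
        self.data = []  # (state, flux) pairs gathered from fine-scale runs

    def predict(self, state):
        if not self.data:
            # Untrained members guess wildly, so disagreement is large.
            preds = [m.uniform(0.0, 1.0) for m in self.members]
        else:
            # Nearest-neighbour lookup, perturbed per member (a toy
            # stand-in for independently trained networks).
            _, flux = min(self.data, key=lambda p: abs(p[0] - state))
            preds = [flux + m.gauss(0.0, 0.01) for m in self.members]
        return statistics.mean(preds), statistics.stdev(preds)

THRESHOLD = 0.05  # hypothetical trigger for a new fine-scale run

surrogate = EnsembleSurrogate()
state, fine_calls = 1.0, 0
for _ in range(10):
    flux, sigma = surrogate.predict(state)
    if sigma > THRESHOLD:            # surrogate is unsure: run the physics
        flux = fine_scale_simulation(state)
        surrogate.data.append((state, flux))
        fine_calls += 1
    state = flux                     # advance the coarse-scale trajectory
```

After the first expensive call, the surrogate's members agree closely near the visited state, so later steps are served from the cheap model.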
Antimicrobial resistance (AMR) is a growing public health threat, estimated to cause over 10 million deaths per year and to cost the global economy $100 trillion by 2050 under status quo projections. These losses would mainly be due to increased morbidity and mortality from treatment failure, AMR infections during medical procedures, and the loss of quality of life attributed to AMR. Numerous interventions have been proposed to control the development of AMR and mitigate the risks posed by its spread. This paper reviews key aspects of bacterial AMR management and control which make use of data technologies such as artificial intelligence, machine learning, and mathematical and statistical modelling, fields that have seen rapid development in this century. Although data technologies have become an integral part of biomedical research, their impact on AMR management has remained modest. We outline the use of data technologies to combat AMR, detailing recent advancements in four complementary categories: surveillance, prevention, diagnosis, and treatment. We provide an overview of current AMR control approaches using data technologies within biomedical research, clinical practice, and the "One Health" context. We discuss the potential impact of data technologies and the implementation challenges they face in high-income as well as low- and middle-income countries, and suggest concrete actions needed to allow these technologies to be more readily integrated into the healthcare and public health sectors.
Many deep neural network architectures loosely based on brain networks have recently been shown to replicate neural firing patterns observed in the brain. One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind. In this work, we show that transformers, when equipped with recurrent positional encodings, replicate the precisely tuned spatial representations of the hippocampal formation, most notably place and grid cells. Furthermore, we show that this result is not surprising, since the architecture is closely related to current hippocampal models from neuroscience. We additionally show that this transformer version offers dramatic performance gains over the neuroscience version. This work continues to bind the computations of artificial and brain networks, offers a novel understanding of hippocampal-cortical interaction, and suggests how wider cortical areas may perform complex tasks beyond current neuroscience models, such as language comprehension.
The search tools used to explore NASA's Astrophysics Data System (ADS) can be quite rich and empowering (e.g., the similar and trending operators), but researchers cannot yet fully leverage semantic search. For example, a query for "results from the Planck mission" should be able to distinguish between all the various meanings of Planck (person, mission, constant, institutions, and more) without further clarification from the user. At ADS, we are applying modern machine learning and natural language processing techniques to our dataset of recent astronomy publications to train astroBERT, a deeply contextual language model based on research from Google. Using astroBERT, we aim to enrich the ADS dataset and improve its discoverability; in particular, we are developing our own named entity recognition tool. We present here our preliminary results and lessons learned.
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that better non-interactive performance does not always translate into better human-LM interaction, and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
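The patch-based training that most respondents used to cope with oversized samples amounts to tiling each image into fixed-size crops before feeding them to the network. A minimal tiling routine is sketched below; the patch size, stride, and the zero-filled placeholder slice are arbitrary illustrative choices, not values from the survey:

```python
import numpy as np

def extract_patches(image, patch=64, stride=64):
    # Tile a 2D image into square patches. With stride == patch the
    # tiles are non-overlapping; a smaller stride yields overlapping
    # patches, a common variant for segmentation training.
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)

# A placeholder 256x256 slice stands in for one 2D slice of a 3D scan
# (solving 3D tasks slice-by-slice is the 2D reduction mentioned above).
slice_2d = np.zeros((256, 256), dtype=np.float32)
batch = extract_patches(slice_2d)
```

Each patch then trains the model as an independent sample, trading global context for a manageable memory footprint.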